Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Recently, a number of uncertainty estimation methods have been introduced for DL medical image segmentation tasks. Developing metrics to evaluate and compare the performance of uncertainty measures will assist end users in making more informed decisions. In this study, we explore and evaluate a metric developed during the BraTS 2019-2020 task on uncertainty quantification (QU-BraTS), designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This metric (1) rewards uncertainty estimates that produce high confidence in correct assertions and assign low confidence levels to incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by the 14 teams that independently participated in QU-BraTS 2020, all of which also took part in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analysis. Our evaluation code is publicly available at https://github.com/RagMeh11/QU-BraTS.
translated by Google Translate
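The filtering behavior this kind of metric rewards can be illustrated by thresholding: voxels whose uncertainty exceeds a confidence threshold are set aside, segmentation quality (e.g., Dice) is computed on the remainder, and the fraction of correct predictions that were filtered out serves as the penalty term. Below is a simplified sketch of that idea; it is our illustration, not the official QU-BraTS implementation (see the linked repository for that):

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two boolean masks; defined as 1.0 when both are empty."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def filtered_scores(pred, truth, uncertainty, thresholds):
    """For each threshold tau, keep only voxels with uncertainty <= tau.

    Returns (tau, Dice on kept voxels, fraction of correct voxels filtered out)
    per threshold; the last value is the quantity a QU-BraTS-style metric penalizes.
    """
    rows = []
    correct = pred == truth
    for tau in thresholds:
        keep = uncertainty <= tau
        d = dice(pred[keep], truth[keep])
        filtered_correct = np.logical_and(~keep, correct).sum() / max(correct.sum(), 1)
        rows.append((tau, d, filtered_correct))
    return rows
```

A well-behaved uncertainty measure concentrates its high uncertainty on erroneous voxels, so Dice on the kept voxels rises as the threshold tightens while few correct voxels are discarded.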
Heating in private households is a major contributor to today's emissions. Heat pumps are a promising alternative for heat generation and a key technology for achieving the goals of the German energy transition and reducing dependence on fossil fuels. Today, the majority of heat pumps in the field are controlled by a simple heating curve, a naive mapping of the current outdoor temperature to a control action. A more advanced approach is model predictive control (MPC), which has been applied to heat pump control in several research works. However, MPC depends heavily on a building model, which has several disadvantages. Motivated by this and by recent breakthroughs in the field, this work applies deep reinforcement learning (DRL) to heat pump control in a simulated environment. Through a comparison to MPC, we show that DRL can be applied in a model-free manner to achieve MPC-like performance. This work extends prior work applying DRL to building heating operation by performing an in-depth analysis of the learned control strategies and by giving a detailed comparison of the two state-of-the-art control methods.
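The "simple heating curve" mentioned above is essentially a fixed mapping from outdoor temperature to a flow-temperature setpoint. A minimal sketch, with entirely illustrative slope, offset, and clamping values (none of these numbers come from the paper):

```python
def heating_curve(outdoor_c: float, slope: float = 1.2, offset: float = 35.0,
                  lo: float = 25.0, hi: float = 55.0) -> float:
    """Naive heating curve: the colder it is outside, the higher the
    flow-temperature setpoint, clamped to the heat pump's operating range."""
    return min(hi, max(lo, offset - slope * outdoor_c))
```

At 0 °C this yields the 35 °C offset, and at -10 °C it rises to 47 °C. MPC and DRL controllers replace this static, stateless mapping with decisions that also account for building dynamics, forecasts, and comfort constraints.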
Language models have been shown to perform better with an increase in scale on a wide variety of tasks via the in-context learning paradigm. In this paper, we investigate the hypothesis that the ability of a large language model to in-context learn-perform a task is not uniformly spread across all of its underlying components. Using a 66 billion parameter language model (OPT-66B) across a diverse set of 14 downstream tasks, we find this is indeed the case: $\sim$70% of attention heads and $\sim$20% of feed forward networks can be removed with minimal decline in task performance. We find substantial overlap in the set of attention heads (un)important for in-context learning across tasks and number of in-context examples. We also address our hypothesis through a task-agnostic lens, finding that a small set of attention heads in OPT-66B score highly on their ability to perform primitive induction operations associated with in-context learning, namely, prefix matching and copying. These induction heads overlap with task-specific important heads, suggesting that induction heads are among the heads capable of more sophisticated behaviors associated with in-context learning. Overall, our study provides several insights that indicate large language models may be under-trained to perform in-context learning and opens up questions on how to pre-train language models to more effectively perform in-context learning.
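The "prefix matching" behavior mentioned above can be made concrete on a sequence that repeats itself: an induction head at position i should attend to the position just after an earlier occurrence of the current token. A toy scoring sketch (our simplification for illustration; the paper's exact scoring procedure may differ):

```python
import numpy as np

def prefix_matching_score(attn, tokens):
    """attn[i, j] is the attention weight from query position i to key position j.

    For each position i, collect the positions p + 1 where tokens[p] is an
    earlier occurrence of tokens[i]; a perfect induction head puts all of its
    attention mass there.
    """
    scores = []
    for i in range(len(tokens)):
        targets = [p + 1 for p in range(i - 1) if tokens[p] == tokens[i]]
        if targets:
            scores.append(sum(attn[i, t] for t in targets))
    return float(np.mean(scores)) if scores else 0.0
```

A head whose attention matches this template on repeated random tokens scores near 1, while a diffuse head scores near 0.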
Knowledge about outcomes is critical for complex event understanding but is hard to acquire. We show that by pre-identifying a participant in a complex event, crowd workers are able to (1) infer the collective impact of salient events that make up the situation, (2) annotate the volitional engagement of participants in causing the situation, and (3) ground the outcome of the situation in state changes of the participants. By creating a multi-step interface and a careful quality control strategy, we collect a high quality annotated dataset of 8K short newswire narratives and ROCStories with high inter-annotator agreement (0.74-0.96 weighted Fleiss Kappa). Our dataset, POQue (Participant Outcome Questions), enables the exploration and development of models that address multiple aspects of semantic understanding. Experimentally, we show that current language models lag behind human performance in subtle ways through our task formulations that target abstract and specific comprehension of a complex event, its outcome, and a participant's influence over the event culmination.
End-to-end speech recognition models trained using joint Connectionist Temporal Classification (CTC)-Attention loss have gained popularity recently. In these models, a non-autoregressive CTC decoder is often used at inference time due to its speed and simplicity. However, such models are hard to personalize because of their conditional independence assumption that prevents output tokens from previous time steps to influence future predictions. To tackle this, we propose a novel two-way approach that first biases the encoder with attention over a predefined list of rare long-tail and out-of-vocabulary (OOV) words and then uses dynamic boosting and phone alignment network during decoding to further bias the subword predictions. We evaluate our approach on open-source VoxPopuli and in-house medical datasets to showcase a 60% improvement in F1 score on domain-specific rare words over a strong CTC baseline.
Word error rate (WER) is the primary metric used to assess the quality of automatic speech recognition (ASR) models. It has been shown that ASR models tend to have much higher WER for speakers with speech impairments than for typical English speakers. At such high error rates, it is difficult to determine whether a model can be useful at all. This study investigates the use of BERTScore, an evaluation metric for text generation, to provide a more informative measure of ASR model quality and usefulness. Both BERTScore and WER were compared against prediction errors manually annotated by speech-language pathologists for error type and assessment. BERTScore was found to correlate better with the human judgments of error type and assessment. BERTScore was particularly robust to orthographic changes that preserve meaning (contraction and normalization errors). Furthermore, BERTScore was a better predictor of the assessment ratings than WER, as measured using an ordinal logistic regression and Akaike's Information Criterion (AIC). Overall, our findings suggest that BERTScore can complement WER when assessing ASR model performance from a practical perspective, especially for accessibility applications, where models can be useful even at accuracies lower than those achieved on typical speech.
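For context, WER is the word-level Levenshtein distance between hypothesis and reference, normalized by the reference length (BERTScore, by contrast, requires a pretrained model and is not sketched here). A minimal implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / len(ref)
```

Note, for example, how a meaning-preserving contraction split ("cannot" vs. "can not") already costs two edits out of three reference words, which is exactly the kind of case where a semantic metric like BERTScore can be more informative.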
Artificial intelligence (AI), machine learning, and deep learning (DL) methods are becoming increasingly important in the field of biomedical image analysis. However, to exploit the full potential of such methods, a representative number of experimentally acquired images containing a significant number of manually annotated objects is needed as training data. Here we introduce SYNTA (synthetic data) as a novel approach for generating synthetic, photo-realistic, and highly complex biomedical images as training data for DL systems. We show the versatility of our approach in the context of muscle fiber and connective tissue analysis in histological sections. We demonstrate that robust and expert-level segmentation can be performed on previously unseen real-world data, without any manual annotations, using synthetic training data alone. As a fully parametric technique, our approach poses an interpretable and controllable alternative to generative adversarial networks (GANs) and has the potential to significantly accelerate quantitative image analysis in a variety of biomedical applications in microscopy and beyond.
The detection and mitigation of harmful biases in modern language models is widely recognized as a crucial open problem. In this paper, we take a step back and investigate how language models come to be biased in the first place. We use a relatively small language model with an LSTM architecture, trained on an English Wikipedia corpus. With access to the data and model parameters at every step of training, we can map in detail how the representation of gender develops, which patterns in the dataset drive this, and how the model's internal state relates to bias in a downstream task (semantic textual similarity). We find that the representation of gender is dynamic, and we identify distinct phases during training. Furthermore, we show that gender information is increasingly concentrated in the model's input embeddings, and that, as a consequence, debiasing these embeddings can be effective at reducing the downstream bias. Monitoring the training dynamics allows us to detect an asymmetry in how the female and male genders are represented in the input embeddings. This is important, as it may cause naive mitigation strategies to introduce new undesirable biases. We discuss the relevance of our findings for mitigation strategies more generally, as well as the prospects of generalizing our methods to larger language models, the Transformer architecture, other languages, and other undesirable biases.
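A common way to quantify how much gender information an input embedding carries, in the spirit of the analysis above, is to project it onto the difference vector of a gendered word pair. A toy sketch of the general idea (our illustration; the paper's actual probing methods may differ):

```python
import numpy as np

def gender_projection(emb: np.ndarray, vocab: dict, word: str) -> float:
    """Project a word's embedding onto the normalized he-she axis.

    Positive values indicate the embedding leans toward "he", negative
    toward "she"; magnitude indicates how much gender information it carries.
    """
    axis = emb[vocab["he"]] - emb[vocab["she"]]
    axis = axis / np.linalg.norm(axis)
    return float(np.dot(emb[vocab[word]], axis))
```

An asymmetry like the one reported above would surface as, for example, systematically larger-magnitude projections for words associated with one gender than the other.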
Developing methods to adversarially challenge NLP systems is a promising avenue for improving both model performance and interpretability. Here, we describe the approach of the team "Longhorns" on Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC), which asked teams to manually fool a model on an extractive question answering task. Our team finished first, with a model error rate of 62%. We advocate for a systematic, linguistically informed approach to formulating adversarial questions, and we describe the results of our pilot experiments as well as our official submission.
Although supervised deep learning has revolutionized speech and audio processing, it necessitates building specialized models for individual tasks and application scenarios. Likewise, it is difficult to apply to dialects and languages for which only limited labeled data is available. Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains. Such methods have shown success in natural language processing and computer vision, achieving new levels of performance while reducing the number of labels required for many downstream scenarios. Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods. Other approaches rely on multi-modal data for pre-training, mixing text or visual data streams with speech. Although self-supervised speech representation learning is still a nascent research area, it is closely related to acoustic word embeddings and learning with zero lexical resources, both of which have seen active research for many years. This review presents approaches for self-supervised speech representation learning and their connection to other research areas. Since many current methods focus solely on automatic speech recognition as a downstream task, we also review recent efforts on benchmarking learned representations to extend applications beyond speech recognition.